
    Providing a New Approach for Modeling and Parameter Estimation of Probability Density Function of Noise in Digital Images

    Most of the noise in digital images arises during image capture or transmission. Images captured by real-world image sensors contain noise which, depending on its cause, can follow different probability density functions. For example, the random nature of the photon arrival process is modeled by a Poisson distribution, which is consistent with the measured distribution of pixel values. The parameters of the noise probability density function (PDF) can be derived to some extent from the properties of the sensor, but they still need to be estimated for each imaging setting. If we assume that the noise PDF is approximately Gaussian, then we need only estimate the mean and variance, because a Gaussian PDF is fully determined by these two parameters. In many cases, however, the noise PDF is not Gaussian and its distribution is unknown. In this study, we introduce a generalized probability density function for modeling noise in images and propose a method to estimate its parameters. Because the generalized PDF has multiple parameters, common parameter estimation techniques, such as maximizing the likelihood function by setting derivatives to zero, would be extremely difficult to apply. We therefore propose the use of evolutionary algorithms for global optimization. The results show that this method accurately estimates the parameters of the probability density function
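    The general idea can be sketched as follows. This is an illustrative example, not the authors' exact model: it fits the two parameters of a zero-mean generalized Gaussian PDF by minimizing the negative log-likelihood with an evolutionary global optimizer.

```python
# Sketch: evolutionary maximum-likelihood fit of a generalized Gaussian
# noise PDF. The generalized Gaussian here is a common noise model with
# pdf(x) = beta / (2*alpha*Gamma(1/beta)) * exp(-(|x|/alpha)**beta);
# the specific PDF and bounds are illustrative assumptions.
import numpy as np
from scipy.optimize import differential_evolution
from scipy.special import gamma

def gen_gaussian_neg_loglik(params, x):
    """Negative log-likelihood of a zero-mean generalized Gaussian.

    params = (alpha, beta): scale and shape; beta = 2 recovers a Gaussian.
    """
    alpha, beta = params
    log_pdf = (np.log(beta) - np.log(2.0 * alpha) - np.log(gamma(1.0 / beta))
               - (np.abs(x) / alpha) ** beta)
    return -np.sum(log_pdf)

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=5000)        # synthetic noise sample

result = differential_evolution(
    gen_gaussian_neg_loglik,
    bounds=[(0.1, 5.0), (0.5, 5.0)],           # search ranges for (alpha, beta)
    args=(noise,),
    seed=0,
)
alpha_hat, beta_hat = result.x
# For Gaussian input with sigma = 1, beta should come out near 2
# and alpha near sigma * sqrt(2).
```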

    Prediction of stroke probability occurrence based on fuzzy cognitive maps

    Among neurological patients, stroke is the most common cause of mortality, and it is a very costly health problem worldwide. Mortality due to the disease can be reduced by identifying and modifying the risk factors. Controllable factors contributing to stroke include hypertension, diabetes, heart disease, hyperlipidemia, smoking, and obesity; by identifying and controlling these risk factors, stroke can be prevented and its effects reduced to a minimum. For quick and timely diagnosis of the disease, an intelligent system is needed to predict stroke risk. In this paper, a method is proposed for predicting the risk of stroke based on fuzzy cognitive maps and the nonlinear Hebbian learning algorithm. The accuracy of the proposed NHL-FCM model is tested using 15-fold cross-validation on 90 actual cases and compared with support vector machine and k-nearest neighbours. The proposed method shows superior performance, with a total accuracy of (95.4 ± 7.5)%
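    The two ingredients named in the abstract can be illustrated with a minimal sketch. The concept names, weight values, and learning constants below are invented for demonstration; they show the general FCM inference step and a standard nonlinear Hebbian weight update, not the authors' tuned map.

```python
# Minimal sketch of fuzzy cognitive map (FCM) inference with a nonlinear
# Hebbian learning (NHL) weight update. Weights and activations are invented.
import numpy as np

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

def fcm_step(activations, weights):
    """One FCM inference step: each concept aggregates weighted influences."""
    return sigmoid(activations + weights.T @ activations)

def nhl_update(weights, activations, eta=0.01, gamma=0.98):
    """Nonlinear Hebbian update: adapt only the existing (nonzero) edges."""
    new_w = weights.copy()
    n = len(activations)
    for j in range(n):
        for i in range(n):
            if i != j and weights[j, i] != 0.0:
                new_w[j, i] = (gamma * weights[j, i]
                               + eta * activations[j]
                               * (activations[i] - weights[j, i] * activations[j]))
    return new_w

# Toy map: three risk-factor concepts feeding one stroke-risk output concept.
W = np.array([[0.0, 0.0, 0.0, 0.6],
              [0.0, 0.0, 0.0, 0.4],
              [0.0, 0.0, 0.0, 0.7],
              [0.0, 0.0, 0.0, 0.0]])
A = np.array([0.8, 0.5, 0.9, 0.0])   # initial concept activations in [0, 1]

for _ in range(10):                  # iterate toward a steady state
    A = fcm_step(A, W)
    W = nhl_update(W, A)
risk = A[-1]                         # activation of the stroke-risk concept
```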

    Automatic sleep apnea detection using fuzzy logic

    Obstructive sleep apnea (OSA) is one of the most important sleep disorders, characterized by obstruction of the respiratory tract and cessation of respiratory flow. Currently, apnea diagnosis is mainly based on polysomnography (PSG) testing during sleeping hours; however, recording all of these signals throughout the night is a very costly, time-consuming and difficult task. The goal of this study is to provide and validate an automatic algorithm that analyzes four PSG recordings and detects the occurrence of sleep apnea from noninvasive features. Four PSG signals were used: oxygen saturation (SaO2), transitional airflow (Air Flow), abdominal movements during breathing (Abdomen mov.) and movements of the chest (Thoracic mov.). We describe a fuzzy algorithm that compensates for imprecise information about the extent of signal loss, based on expert opinions. Signal classification is implemented minute by minute on 30 labeled samples from the MIT/BIH dataset (acquired from PhysioNet). The data obtained from 18 apnea subjects (11 males and 7 females, mean age 43 years) were categorized into three output classes: apnea, hypopnea and normal breathing. The proposed algorithm shows proficiency in diagnosing OSA, with an acceptable sensitivity of 86% and specificity of 87%
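    A minute-by-minute fuzzy classifier of this kind can be sketched as below. All membership shapes and thresholds are invented for illustration; the study's actual rules were tuned from expert opinions and all four PSG signals.

```python
# Hypothetical sketch: triangular fuzzy memberships on airflow reduction and
# SaO2 desaturation feed simple rules for normal / hypopnea / apnea per minute.
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify_minute(airflow_drop, sao2_drop):
    """airflow_drop, sao2_drop: fractional reductions in [0, 1] for one minute."""
    # Memberships for airflow reduction: mild / moderate / severe.
    flow_mild = tri(airflow_drop, -0.01, 0.0, 0.4)
    flow_mod = tri(airflow_drop, 0.2, 0.5, 0.8)
    flow_sev = tri(airflow_drop, 0.6, 1.0, 1.01)
    # Membership for significant oxygen desaturation.
    desat = tri(sao2_drop, 0.02, 0.1, 1.01)
    # Rules: min acts as fuzzy AND; the strongest rule wins.
    scores = {
        "normal": flow_mild,
        "hypopnea": min(flow_mod, desat),
        "apnea": flow_sev,
    }
    return max(scores, key=scores.get)

label = classify_minute(airflow_drop=0.9, sao2_drop=0.15)  # severe obstruction
```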

    Computer Implementation of a New Therapeutic Model for GBM Tumor

    Modeling tumor behavior in the host organ as a function of time and radiation dose has been a major subject of study in recent decades. Here, an effort to estimate cancerous and normal cell proliferation and growth in glioblastoma multiforme (GBM) tumors is presented. This paper introduces a new mathematical model of tumor growth in the form of a differential equation. The model contains the delivered dose in the treatment scheme as an input term, and it can also be utilized to optimize the treatment process in order to increase the patient's survival period. Gene expression programming (GEP), a relatively new technique, is used to estimate this model. The linear-quadratic (LQ) model has also been supplied to GEP as an initial value, accelerating and improving the algorithm's estimation. The model gives the number of tumor and normal brain cells during the treatment process, using the status of normal and cancerous cells at the initiation of treatment, the timing and amount of dose delivered to the patient, and a coefficient that describes the brain condition. A critical level of normal cells is defined, below which the patient's death occurs. Finally, the model is verified against clinical data obtained from previously accepted formulae and some of our experimental resources. The proposed model helps to predict tumor growth during the treatment process, so that further treatment can be controlled
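    The overall structure of such a model can be illustrated with a simple sketch (not the authors' fitted GEP model): logistic growth as the differential equation, with each radiation fraction applied through the standard LQ survival model S(d) = exp(-alpha*d - beta*d^2). All parameter values are invented for demonstration.

```python
# Illustrative sketch: tumour cell count under logistic growth with
# radiation fractions applied via the linear-quadratic (LQ) model.
import math

def simulate(days, dose_schedule, n0=1e9, k=0.05, n_max=1e12,
             alpha=0.3, beta=0.03):
    """Simulate tumour cell count day by day.

    dose_schedule: dict mapping day -> dose in Gy delivered that day.
    k: logistic growth rate per day; n_max: carrying capacity.
    alpha, beta: LQ radiosensitivity coefficients (illustrative values).
    """
    n = n0
    history = [n]
    for day in range(1, days + 1):
        n += k * n * (1.0 - n / n_max)                 # one Euler step of growth
        d = dose_schedule.get(day, 0.0)
        if d > 0.0:
            n *= math.exp(-alpha * d - beta * d * d)   # LQ cell kill
        history.append(n)
    return history

# Conventional 2 Gy weekday fractions for 4 weeks (weekends off).
schedule = {day: 2.0 for day in range(1, 29) if day % 7 not in (6, 0)}
traj = simulate(28, schedule)
# The treated trajectory should end well below its starting cell count.
```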

    A Review on EEG Signals Based Emotion Recognition

    Emotion recognition has become a prominent and much-debated topic in brain-computer interfaces (BCIs), and numerous studies have been conducted to recognize emotions. There are also several important definitions and theories about human emotions. In this paper we try to cover the important topics in the field of emotion recognition. We review several studies based on analyzing electroencephalogram (EEG) signals as a biological marker of emotional change. Considering its low cost and good temporal resolution, EEG has become very common and is widely used in most BCI applications and studies. First, we state some theories and basic definitions related to emotions. Then important steps of an emotion recognition system are described, such as the different kinds of biological measurements (EEG, electrocardiogram [ECG], respiration rate, etc.), offline vs. online recognition methods, emotion stimulation types, and common emotion models. Finally, the most important recent studies are reviewed

    Emotion Classification through Nonlinear EEG Analysis Using Machine Learning Methods

    Background: Emotion recognition, as a subset of affective computing, has received considerable attention in recent years. Emotions are key to human-computer interaction. The electroencephalogram (EEG) is considered a valuable physiological source of information for classifying emotions; however, it exhibits complex and chaotic behavior.
    Methods: In this study, an attempt is made to extract important nonlinear features from EEGs with the aim of emotion recognition. We also take advantage of machine learning methods such as evolutionary feature selection and committee machines to enhance classification performance. Classification was performed with respect to both arousal and valence.
    Results: The results suggest that the proposed method is successful and comparable to previous work. A recognition rate of 90% was achieved, and the most significant features are reported. We applied the final classification scheme to two different databases, our own recorded EEGs and a benchmark dataset, to evaluate the suggested approach.
    Conclusion: Our findings confirm the effectiveness of using nonlinear features and a combination of classifiers. The results are also discussed from different points of view to better understand brain dynamics during emotion changes. This study reveals useful insights about emotion classification and brain behavior related to emotion elicitation
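    To make the two techniques concrete, here is a small sketch of one widely used nonlinear EEG feature (the Higuchi fractal dimension) and a majority-vote committee. The abstract does not specify which nonlinear features or combination rule were used, so both choices here are assumptions for illustration.

```python
# Sketch: a nonlinear EEG feature (Higuchi fractal dimension) plus a simple
# majority-vote committee machine. Signals below are synthetic.
from collections import Counter
import numpy as np

def higuchi_fd(x, k_max=8):
    """Higuchi fractal dimension of a 1-D signal.

    Approaches 1 for smooth signals and 2 for white noise.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    log_k, log_l = [], []
    for k in range(1, k_max + 1):
        lm = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            length = np.sum(np.abs(np.diff(x[idx])))
            length *= (n - 1) / ((len(idx) - 1) * k)   # Higuchi normalisation
            lm.append(length / k)
        log_k.append(np.log(1.0 / k))
        log_l.append(np.log(np.mean(lm)))
    slope, _ = np.polyfit(log_k, log_l, 1)             # slope is the dimension
    return slope

def majority_vote(predictions):
    """Committee machine: combine classifier outputs by majority vote."""
    return Counter(predictions).most_common(1)[0][0]

rng = np.random.default_rng(1)
fd_noise = higuchi_fd(rng.standard_normal(1000))                  # near 2
fd_sine = higuchi_fd(np.sin(np.linspace(0.0, 8.0 * np.pi, 1000)))  # near 1
label = majority_vote(["high_arousal", "low_arousal", "high_arousal"])
```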

    Estimation and evaluation of pseudo-CT images using linear regression models and texture feature extraction from MRI images in the brain region to design external radiotherapy planning

    Aim: The aim of this study is to construct and evaluate pseudo-CT images (P-CTs) for electron density calculation, to facilitate external radiotherapy treatment planning.
    Background: Despite its numerous benefits, computed tomography (CT) does not provide accurate soft-tissue contrast, which often makes it difficult to precisely differentiate target tissues from organs at risk and to determine the tumor volume. MRI can therefore reduce the variability of results when registered with a CT scan.
    Materials and methods: In this research, a fuzzy clustering algorithm was used to segment images into different tissues, and linear regression methods were used to design regression models based on extracted texture features and intensity values. The results of the proposed algorithm for the dose-volume histogram (DVH), isodose curves, and gamma analysis were investigated using the RayPlan treatment planning system and VeriSoft software. Furthermore, statistical indices such as the mean absolute error (MAE), mean error (ME), and structural similarity index (SSIM) were calculated.
    Results: The proposed methods achieved an MAE in the range of 45–55. The relative difference error between the PTV region of the CT and the pseudo-CT was 0.5, and the best gamma pass rate was 95.4%, obtained with the polar coordinate feature and the proposed polynomial regression model.
    Conclusion: The proposed method can support the generation of P-CT data for different parts of the brain from a collection of MRI series, with an acceptable average error rate by different evaluation criteria
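    The core regression step can be sketched on toy data. This is a deliberately simplified stand-in: it fits a polynomial regression from a single MRI intensity value to a CT number and reports the MAE on held-out voxels, whereas the study additionally used texture features and fuzzy-clustering-based tissue segmentation. The mapping and noise level below are synthetic.

```python
# Toy sketch of the pseudo-CT regression idea: polynomial fit from MRI
# intensity to CT number on paired voxels, evaluated by MAE on a test split.
import numpy as np

rng = np.random.default_rng(42)
mri = rng.uniform(0.0, 1.0, size=2000)                  # normalised MRI intensities
true_hu = 1500.0 * mri**2 - 400.0 * mri - 100.0         # synthetic intensity->HU map
ct = true_hu + rng.normal(0.0, 30.0, size=mri.shape)    # noisy "measured" CT numbers

train, test = slice(0, 1500), slice(1500, 2000)
coeffs = np.polyfit(mri[train], ct[train], deg=2)       # quadratic regression model
pred = np.polyval(coeffs, mri[test])                    # pseudo-CT values
mae = np.mean(np.abs(pred - ct[test]))                  # mean absolute error
# With noise of std 30, the MAE should land near 30 * sqrt(2/pi) ~ 24.
```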